Search Results: "pino"

30 April 2015

Eddy Petrișor: Linksys NSLU2 JTAG help requested

Some time ago I embarked on a journey to install NetBSD on one of my two NSLU2s. I have run into all sorts of hurdles and problems, which I finally managed to overcome, except one:

The NSLU2 I am using has a standard 20-pin ARM JTAG connector attached to it (as per this page http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort, with only the TDI, TDO, TMS, TCK, Vref and GND signals wired), but, although the chip is identified, I am unable to halt the CPU:
    $ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg
    Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)
    Licensed under GNU GPL v2
    For bug reports, read
    http://openocd.sourceforge.net/doc/doxygen/bugs.html
    Info : only one transport option; autoselect 'jtag'
    adapter speed: 300 kHz
    Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints
    0
    Info : clock speed 300 kHz
    Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009,
    part: 0x9277, ver: 0x2)
    [..]
    $ telnet localhost 4444
    Trying ::1...
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Open On-Chip Debugger
    > halt
    target was in unknown state when halt was requested
    in procedure 'halt'
    > poll
    background polling: on
    TAP: ixp42x.cpu (enabled)
    target state: unknown
My main goal is to make sure I can flash the device via JTAG, in case I break it, but it would be ideal if I could use the JTAG to single step through the code.

I have found that other people have managed to flash the device via JTAG without the other signals, and some have even changed the bootloader (with JTAG confirmed as a backup solution), so it should be possible, yet I am stuck.

So if anyone can give some insight into IXP42x / XScale / NSLU2-specific JTAG issues, or hints regarding this issue with OpenOCD or another such tool, I would be really grateful.
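For reference, reproducing the problem is just a matter of the following (the reset_config none part is my guess, since none of the reset lines are wired on this header):

    $ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg \
              -f board/linksys_nslu2.cfg \
              -c 'reset_config none'
    $ telnet localhost 4444
    > poll            # reports "target state: unknown"
    > halt            # fails with the error quoted above
    > reset halt      # the obvious thing to try, but it needs a usable reset strategy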


Note: I have made a hacked second-stage Apex bootloader to load the NetBSD image via TFTP, but the default RedBoot sequence 'boot; exec 0x01d00000' needs to be 'boot; go 0x01d00000' for NetBSD to work, so I am considering changing the RedBoot partition to alter that command. The gory details can be summed up as: my Apex calls RedBoot functions to get network support (because Intel's current NPE code does not work on Apex), and I have tested this to work with go, but not with exec.
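Before resorting to rewriting the RedBoot partition, it may be worth checking whether the boot script can simply be edited from the RedBoot prompt with fconfig (sketched from memory; the prompts may differ, and the NSLU2's RedBoot may not expose this at all):

    RedBoot> fconfig
    Run script at boot: true
    Boot script:
    >> boot; go 0x01d00000
    >>
    Update RedBoot non-volatile configuration - continue (y/n)? y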

22 April 2015

Tollef Fog Heen: Temperature monitoring using a Beaglebone Black and 1-wire

I've had a half-broken temperature monitoring setup at home for quite some time. It started out with an Atom-based NAS, a USB-serial adapter and a passive 1-wire adapter. It sometimes worked, then stopped working, then started when poked with a stick. Later, the NAS was moved under the stairs and I put a Beaglebone Black in its old place. The temperature monitoring thereafter never really worked, but I didn't have the time to fix it. Over the last few days, I've managed to get it working again, of course by replacing nearly all the existing components.

I'm using the DS18B20 sensors. They're about USD 1 a piece on Ebay (when buying small quantities) and seem to work quite OK. My first task was to address the reliability problems: dropouts and really poor performance. I suspected the passive adapter was the problem, in particular with the wire lengths I'm using, and I therefore wanted to replace it with something else. The BBB has GPIO support, and various blog posts suggested using that. However, I'm running Debian on my BBB, which doesn't have support for DTB overrides, so I needed to patch the kernel DTB. (Apparently, DTB overrides are landing upstream, but obviously not in time for Jessie.)

I've never even looked at Device Tree before, but the structure was reasonably simple, and with a sample override from bonebrews it was easy enough to come up with my patch. This uses pin 11 (yes, 11, not 13; read the bonebrews article for an explanation of the numbering) on the P8 block. This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then run make dtbs.

Once this works, you need to compile the w1-gpio kernel module, since Debian hasn't yet enabled that. Run make menuconfig, find it under "Device drivers", "1-wire", "1-wire bus master", and build it as a module. I then had to build a full kernel to get the symversions right, then build the modules. I think there is, or should be, an easier way to do that, but as I cross-built it on a fast AMD64 machine, I didn't investigate too much.

Insmod-ing w1-gpio then works, but for me it failed to detect any sensors. Reading the data sheet, it looked like a pull-up resistor on the data line was needed. I had enabled the internal pull-up, but apparently that wasn't enough, so I added a 4.7 kOhm resistor between pin 3 (VDD_3V3) on P9 and pin 11 (GPIO_45) on P8. With that in place, my sensors showed up in /sys/bus/w1/devices and you can read the values using cat.

In my case, I wanted the data to go into collectd and then to graphite. I first tried using an Exec plugin, but never got it to work properly. Using a Python plugin worked much better, and my graphite installation is now showing me temperatures. Now I just need to add more probes around the house. The most useful references were the bonebrews article and the DS18B20 data sheet, in addition to various searches for DS18B20 pinout and similar, of course.
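For anyone repeating this, the whole recipe boils down to roughly the following (a sketch; module and sensor names are illustrative, and the 28-* ID is whatever your sensor reports):

    # on the build machine: patched .dts dropped into an unpacked, configured kernel tree
    $ make dtbs
    $ make modules            # with w1-gpio enabled as a module via menuconfig

    # on the BBB, with the new DTB installed and the module copied over
    $ insmod w1-gpio.ko
    $ ls /sys/bus/w1/devices/
    28-000004b5a1c2  w1_bus_master1
    $ cat /sys/bus/w1/devices/28-000004b5a1c2/w1_slave
    a3 01 4b 46 7f ff 0d 10 d8 : crc=d8 YES
    a3 01 4b 46 7f ff 0d 10 d8 t=26187

The t= value is the temperature in millidegrees Celsius.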

9 December 2014

Joey Hess: podcasts that don't suck, 2014 edition

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars. PS: A nice podcatcher, for the technically inclined, is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
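In practice that looks something like this (feed URLs are placeholders):

    $ cat feeds
    https://example.com/some-podcast/rss
    https://example.com/another-podcast/rss
    $ xargs git-annex importfeed < feeds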

10 September 2014

Ian Donnelly: New Release: Elektra 0.8.8

Hi Everybody! Great news! I am very happy to announce that we have reached a new milestone for Elektra and released a new version, 0.8.8! This release comes right on the tail of the 0.8.7 release and it might just be our biggest release yet! We already have a great article covering all the changes from the previous release in our News documentation on GitHub. I just wanted to focus on a few of those changes on this blog, especially the ones that pertain to my Google Summer of Code project.

First of all, Felix has worked to greatly improve the ini plug-in. This is the plug-in I used in my technical demo for mounting Samba's smb.conf file. It now works even better with complex ini files such as smb.conf, which means the automatic merging of files like smb.conf is even better now! That really goes to show one of the greatest strengths of the design of Elektra: just by improving plug-ins, all the functions of Elektra can improve as well. The merge code was not changed in this release, yet because of an updated plug-in, the merge has improved.

Secondly, there have been some good improvements to the kdb command-line tool. Many of these improvements were used in my technical demo, but now they are actually part of a release (and a little more refined than they were then). We added a new command called kdb remount which allows a user (or script) to mount a file to the Elektra Key Database using an existing backend. An example of this command is:
kdb remount [new filename] [new path] [existing mountpoint]
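For instance (file names and paths invented for illustration; the first mount is what provides the existing backend):

kdb mount /etc/samba/smb.conf system/samba ini
kdb remount /etc/samba/smb.conf.dpkg-new system/samba-new system/samba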

This command mounts the new file to the new path in the Key Database using an existing backend. This works with the conffile merging by allowing us to mount the various versions of the conffile without having to specify which backend to use (it will use the same backend as the currently used conffile). Additionally, the umount command was updated to allow users to umount using the current mount path (allowing commands such as kdb umount system/smb.conf) as opposed to backends. Moreover, we added an option to the kdb import command to specify a merge strategy using the -s option. Now you can import a file into the Key Database and merge the content of that file with the current keys in the database.

Thirdly, we added some new scripts to Elektra to help with the ucf integration. These scripts were used in my technical demo, but now they are part of the release. elektra-mount and elektra-umount are wrappers for the commands kdb mount and kdb umount respectively. They are designed to be used in Debian package scripts and are adapted for easier use than the generic commands. For instance, running elektra-mount will check whether a file is already mounted at that location in the Key Database. Similarly, elektra-umount will not produce an error if the file was already unmounted. This is because maintainer scripts can be run multiple times in a row, and producing an error would stop dpkg even when it shouldn't. Additionally, we added a script called elektra-merge which can be used as a method for ucf to merge configuration files. This script acts as a liaison between ucf and Elektra, allowing automatic merges to be done by ucf using Elektra's merge features in a seamless manner. For information on how these scripts work, check out my tutorial on integrating elektra-merge into a Debian package.

The last bit of news I would like to share is the great progress of the Debian package. Thanks to Pino Toscano, version 0.8.7-4 of Elektra is now available in the Debian testing repo! This is great news, as we are now that much closer to replacing the outdated Elektra 0.7 versions that are currently the latest versions of Elektra in the stable repo. Once the 0.8.x versions of Elektra make it to stable, it will be much easier for us to keep the latest versions of Elektra in Debian, and that's key to allowing Elektra to help improve users' lives. You can download the release from:
http://www.markus-raab.org/ftp/elektra/releases/elektra-0.8.8.tar.gz size: 1644441
md5sum: fe11c6704b0032bdde2d0c8fa5e1c7e3
sha1: 16e43c63cd6d62b9fce82cb0a33288c390e39d12
sha256: ae75873966f4b5b5300ef5e5de5816542af50f35809f602847136a8cb21104e2

And the API documentation can be found here: http://doc.libelektra.org/api/0.8.8/html/ Hope you enjoy the new release! Sincerely,
Ian S. Donnelly

22 August 2014

Ian Donnelly: Wrapping Up

Hi Everybody! I have been keeping very busy on this blog the past few days with some very exciting updates! Everything is now in place for my Google Summer of Code project! Elektra now has the ability to merge KeySets, ucf has been patched to allow custom merge commands, we included a new elektra-merge script to use in conjunction with the new ucf command, and we wrote a great tutorial on how to use Elektra to merge configuration files in any package using ucf. There are a few things I have missed updating you all on, or that are still wrapping up.

First of all, I would like to address the Debian packages. While Elektra includes everything needed for my Google Summer of Code project, it must be built from source right now. Unfortunately, with the rapid development Elektra has seen over the past few months, we did not pay enough attention to the Debian packages and they became dusty and riddled with bugs. Fortunately, we have a solution and his name is Pino Toscano. Pino has, very graciously, agreed to help us get our Debian packages back into shape. If you wish to see the current progress of the packages, you can check his repo. Pino has already made fantastic progress creating the Debian packages. I will post here when the packages are all fixed up and the latest versions of Elektra are in the Debian repo.

Some great news that we just received is that Elektra 0.8.7 has just been accepted into Debian unstable! This is huge progress for our team and means that users can now download our packages directly from Debian's unstable repo and test out these new features. Obviously, the normal caveats apply for any packages in unstable, but this is still an amazing piece of news and I would like to thank Pino again for all the support he has provided.

Another worthy piece of news that I unfortunately haven't had time to cover is that, thanks to Felix Berlakovich, Elektra has a new plug-in called ini. In case you couldn't guess, this new plug-in is used to mount ini files, and the good news is that it is a vast improvement on our older simpleini plug-in. While it is still a work in progress, this new plug-in is much more powerful than simpleini. The new plug-in supports sections and comments and works with many more ini files than the old plug-in. You may have noticed that I used this new plug-in to mount smb.conf in my technical demo of Samba using Elektra for configuration merging. Since smb.conf follows the same syntax as an ini file, this new plug-in works great for mounting this file into the Elektra Key Database. I hope my blogs so far have been very informative and helpful. Sincerely,
Ian S. Donnelly

4 August 2014

Ian Donnelly: New Release: Elektra 0.8.7

Hi Everybody! I am very proud to inform you all that Elektra has just shipped a new release, version 0.8.7, with many great features and fixes! First of all, I want to let you all know that a lot of work from my Google Summer of Code project has made its way into this release. Elektra now includes support for a three-way merge of KeySets! A special and sincere thanks goes out to Felix Berlakovich for helping me test this new merge feature and adding some great features to allow for different merge strategies and dealing with meta keys. You can try out the new merge features using the kdb merge command or by using the Elektra API. There is still work to be done on merging and on improving documentation (also look for some posts on this blog soon about the feature).

Additionally, thanks to Felix, we have technical previews for some new plug-ins. The new plug-ins are keytometa and ini. In short, the keytometa plugin allows converting normal keys to meta keys during the get operation and reverting this conversion during the set operation. The ini plugin is basically a rewrite of the simpleini plugin and makes use of the inih library. Also, there have been many improvements made to the glob plug-in. Felix even found some time to add a new script for bash tab completion, which is located under scripts/kdb-bash-completion. To use it on Debian, just copy it to /etc/bash_completion.d/ and make sure it is executable.

Moreover, we fixed a lot of things with this newest release. Pino Toscano has been working on fixing up the Debian packages for Elektra, but he has also fixed many other things along the way, including a lot of spelling errors, simplifying the RPATH setting, improvements to respecting $HOME and $TMPDIR, and improvements to some test cases. The kdb tool now does a better job of checking for subfolders that aren't allowed, and it now makes sure to output warnings before errors so errors can more easily be seen. We have also improved some tests for the kdb tool and some plugins, as well as fixed compiler warnings on clang and gcc 4.9. We also made some fixes to kdb import and export for some storage plugins, and fixed some bugs so that kdb run_all now works flawlessly.

There have also been a few tweaks to the API for this release, specifically in the C++ bindings. There is now a delMeta() function for C++. The reason for this is that, contrary to the C API, calling

key.setMeta("metaname", NULL)

does not delete the metadata, but stores the value "0". Additionally, we changed the arguments of isBelow, isDirectBelow, and isBelowSame in the C++ binding to be easier to understand and more natural to use. Before this change, the C++ binding closely mirrored the C API, which led to unintuitive behaviour. Before the change, the API did the following:

Key ("user/config/key/below").isBelow (Key ("user/config")) == false
Key ("user/config/key/below").isBelow (Key ("user/config/key/below/deeper")) == true

That is because the first argument in the C API is the object itself in the C++ API.
The attribute of being below the key in question (the object) refers to the second key in the C API.
While this makes some sense for the C API, it definitely does not for the C++ API. Now the API behaves as follows (as intuitively expected):

Key ("user/config/key/below").isBelow (Key ("user/config")) == true
Key ("user/config/key/below").isBelow (Key ("user/config/key/below/deeper")) == false

We even had time for a bunch of documentation changes. We have now added a tutorial for contextual values to GitHub so developers can start using contextual values with Elektra. We also included a specification for metadata and a better specification for contracts. There is even a little bit of extra news to share. We now use GitHub for active development of Elektra. We have adopted its issue tracker for issues. Also, pull requests now automatically get built by the server to see if the merge would break the build and whether it passes all the tests. We are also in the process of updating a lot of our documentation and READMEs to use Markdown so they can be viewed easily on GitHub. Also, Raffael Pancheri has been making really great progress on a qt-gui for Elektra. There is still work to be done but it looks great and is coming along nicely. You can download the release now from Markus' site:

http://www.markus-raab.org/ftp/elektra/releases/elektra-0.8.7.tar.gz size: 1566800
md5sum: 4996df62942791373b192c793d912b4c
sha1: 00887cc8edb3dea1bc110f69ea64f6b700c29402
sha256: 698ebd41d540eb0c6427c17c13a6a0f03eef94655fbd40655c9b42d612ea1c9b

Also, there are packages already ready for some distributions. There is a lot of ongoing work to fix the Debian packages and I will post about it on this blog when they are good to go! Enjoy the new release!
-Ian S. Donnelly

22 November 2013

Joey Hess: 2013 git-annex user survey

Similar to the yearly git user survey, I am doing a 2013 git-annex user survey. If you use git-annex, please take a few minutes to answer my questions!
Since this blog post is too short, let me also announce a minor spinoff project from git-annex that I have recently released. git-repair is a complement to git fsck that can fix up arbitrarily damaged git repositories. As well as avoiding the need to rm -rf a damaged repository and re-clone, using git-repair can help rescue commits you've made to the damaged repository and not yet pushed out. I've been testing git-repair with evil code that damages git repositories in random ways. It has now successfully repaired tens of thousands of damaged repositories. In the process, I have found some bugs in git itself.
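Basic usage is deliberately boring (a sketch; see the git-repair man page for details):

    # run inside the damaged repository
    $ git-repair
    # if that is not enough, allow it to throw away unrecoverable data
    $ git-repair --force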

22 July 2013

Matthew Garrett: ARM and firmware specifications

Jon Masters, Chief ARM Architect at Red Hat, recently posted a description of his expectations for baseline arm64 servers. The quick summary is that systems should implement UEFI and ACPI, and any more traditional ARM boot mechanisms should be ignored. This is an interesting departure from the status quo in the ARM world, and it's worth thinking about the benefits and drawbacks of this approach.

It's very easy to build a generic kernel for most x86 systems, since the PC platform is fairly well defined even if not terribly well specified. Where system hardware does vary, it's almost always exposed on an enumerable bus (such as PCI or USB) which allows the OS to bind appropriate drivers. Things are different in the ARM world. Even once you're past the point of different SoC vendors requiring different kernel setup code and drivers, you still have to cope with the fact that system vendors can wire these SoCs up very differently. Hardware is often attached via GPIO lines without any means to enumerate them. The end result is that you've traditionally needed a different kernel for every ARM board. This is viable if you're selling the OS and hardware as a single product, but less viable if there's any desire to run a generic OS on the hardware.

The solution that's been adopted for this in the Linux world is called Device Tree. Device Tree actually has significant history, having been used as the device descriptor format in Open Firmware. Since there was already support for it in the Linux kernel, adapting it for use in ARM devices was straightforward. Device Tree aware devices can pass a descriptor blob to the kernel at startup[1], and devices without that knowledge can have a blob built into the kernel.

So, if this problem is already solved, why the push to move to UEFI and ACPI? This push didn't actually originate in the Linux world - Microsoft mandate that Windows RT devices implement UEFI and ACPI, and were they to launch a Windows ARM server product they would probably carry that over. That makes sense for Microsoft, since recent versions of Windows have been x86 only and so have grown all kinds of support for ACPI and UEFI. Supporting Device Tree would require Microsoft to rewrite large parts of Windows, whereas mandating UEFI and ACPI allowed them to reuse most of their existing Windows boot and driver code. As a result, largely at Microsoft's behest, ACPI 5 has grown a range of additional features for describing things like GPIO pinouts and I2C connections. Whatever your weird device layout, you can probably express it via ACPI.

This argument works less well for Linux. Linux already supports Device Tree, whereas it currently doesn't support ACPI or UEFI on ARM[2]. Hardware vendors are already used to working with Device Tree. Moving to UEFI and ACPI has the potential to uncover a range of exciting new kernel issues and vendor bugs. It's not obviously an engineering win.

So how about users? There's an argument that since server vendors are now mostly shipping ACPI and UEFI systems, having ARM support these technologies makes it easier for customers to replace x86 systems with ARM systems. This really doesn't fly for ACPI, which is entirely invisible to the user. There are no standard ACPI entry points for system configuration, and the bits of ACPI that are generically useful (such as configuring system wakeup times) are already abstracted away to a standard interface by the kernel. It's somewhat more compelling for UEFI. UEFI supports a platform-independent bytecode language (EFI Byte Code, or EBC), which means that customers can write their own system management utilities, build them for EBC and then deploy them to their servers without caring about whether they're x86 or ARM. Want a bootloader that'll hit an internal HTTP server in order to determine which system image to deploy, and which works on both x86 and ARM? Straightforward.

Arnd Bergmann has an interesting counterargument. In a nutshell, ARM servers aren't currently aiming for the same market as x86 servers, and as a result customers are unlikely to gain any significant benefit from shared functionality between the two.

So if there's no real benefit to users, and if there's no benefit to kernel developers, what's the point? The main one that springs to mind is that there is a benefit to distributions. Moving to UEFI means that there's a standard mechanism for distributions to interact with the firmware and configure the bootloader. The traditional ARM approach has been for vendors to ship their own version of u-boot. If that's in flash then it's not much of a problem[3], but if it's on disk then you have to ship a range of different bootloaders and know which one to install (and let's not even talk about initial bootstrapping).

This seems like the most compelling argument. UEFI provides a genuine benefit for distributions, and long term it probably provides some benefit to customers. The question is whether that benefit is worth the flux. The same distribution benefit could be gained by simply mandating a minimum set of u-boot functionality, which would seem much more straightforward. The customer benefit is currently unclear.

In the end it'll probably be a market decision. If Red Hat produce an ARM product that has these requirements, and if Suse produce an ARM product that will work with u-boot and Device Tree, it'll be up to vendors to decide whether the additional work to support UEFI/ACPI is worth it in order to be able to sell to customers who want Red Hat. I expect that large vendors like HP and Dell will probably do it, but the smaller ones may not. The customer demand issue is also going to be unclear until we learn whether using UEFI is something that customers actually care about, rather than a theoretical benefit.

Overall, I'm on the fence as to whether a UEFI requirement is going to stick, and I suspect that the ACPI requirement is tilting at windmills. There's nothing stopping vendors from providing a Device Tree blob from UEFI, and I can't think of any benefits they gain from using ACPI instead. Vendor interest in the generic parts of the ACPI spec has been tepid even in the x86 world (the vast majority of ACPI spec updates come from Microsoft and Intel, not any of the system vendors), and I don't see that changing with the introduction of a range of ARM vendors who are already happy with Device Tree.

We'll see. Linux is going to need to gain the support for UEFI and ACPI on ARM in any case, since there's already hardware shipping in that configuration. But with ARM vendors still getting to grips with Device Tree, forcing them to go through another change in how they do things is going to be hard work. Red Hat may be successful in enforcing these requirements at the cost of some vendor unhappiness, or Red Hat may find that their product doesn't boot on most of the available hardware. It's an aggressive gamble, and while it'll be interesting to see how it plays out, I'm not that optimistic.

[1] The blob could be pulled from the firmware, but it's not uncommon for it to be built into u-boot instead. This does mean that you have a device-specific u-boot even if you have a generic kernel, but that's typically true anyway.
[2] Patches have been posted for ARM UEFI support. They're not mergeable in their current form, but they should be in the near future. ACPI support is in development.
[3] Although not all u-boots are created equal - some vendors ship versions that will only boot off FAT, some vendors ship versions that will only boot off ext2. Having to special case this stuff in your installer is a pain.


28 June 2013

Justus Winter: Getting into the mood - my second week

tl;dr version: shutdown works, Debian GNU/Hurd booting using sysvinit:
Loading GNU Mach ...
Loading the Hurd ...
GNU Mach 1.3.99-486
AT386 boot: physical memory map from 0x0 to 0x9f400
AT386 boot: physical memory map from 0x100000 to 0x1fffe000
AT386 boot: physical memory from 0x0 to 0x1fffe000
Enabling FXSR
pcibios_init : BIOS32 Service Directory structure at 0xfcff0
pcibios_init : BIOS32 Service Directory entry at 0xfc7ba
pcibios_init : PCI BIOS revision 2.10 entry at 0xfc78c
Probing PCI hardware.
ide: Intel 82371 PIIX3 (dual FIFO) DMA Bus Mastering IDE
    Controller on PCI bus 0 function 9
ide: BM-DMA feature is not enabled (BIOS), enabling
    ide0: BM-DMA at 0xc100-0xc107
    ide1: BM-DMA at 0xc108-0xc10f
hd0: got CHS=762/128/63 CTL=c8 from BIOS
hd0: QEMU HARDDISK, 3001MB w/256kB Cache, CHS=762/128/63, DMA
hd2: QEMU DVD-ROM, ATAPI CDROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
Floppy drive(s): fd0 is 1.44M
FDC 0 is a S82078B
probing scsi 5/22: Adapteco1542FInvalidiaddresseforishpnt.with 1542.
Invalid address for shpnt with 1542.
probing scsi 18/22:AWesternxDigitalAWD-7000nFailed9initialization of WD-7000 SCSI card!
probing eata on/1f0EATA0:Daddress 0x1f04in3use, skipping probe.
probing eata on 170EATA0: address 0x170 in use, skipping probe.
probing scsi 21/22: Iomega7parport ZIP drive ppa: Version 1.42
ppa: Probing port 03bc
ppa: Probing port 0378
ppa:         SPP port present
ppa:         PS/2 bidirectional port present
ppa: Probing port 0278
done
scsi : 0 hosts.
scsi : detected total.
Partition check (DOS partitions):
 hd0: hd0s1 hd0s2 < hd0s5 >
lpr0: at atbus0, port = 378, spl = 6, pic = 7.
2omultibootxmodulesd/execl$(exec-task=task-create)ne=$ kernel-command-line  --host-priv-port=$ host-port  --device-master-port=$ device-port  --exec-server-task=$ exec-task  -T typed $ root  $(task-create) $(task-resume)               task loaded: ext2fs --readonly --multiboot-command-line=root=device:hd0s1 console=com0 --host-priv-port=1 --device-master-port=2 --exec-server-task=3 -T typed device:hd0s1
task loaded: exec /hurd/exec
start ext2fs: Hurd server bootstrap: ext2fs[device:hd0s1] exec init proc auth
INIT: version 2.88 booting
Using makefile-style concurrent boot in runlevel S.
Activating swap...done.
Checking root file system...fsck from util-linux 2.20.1
hd2 : tray open or drive not ready
hd2 : tray open or drive not ready
hd2 : tray open or drive not ready
hd2 : tray open or drive not ready
end_request: I/O error, dev 02:00, sector 0
ext2fs_check_if_mount: Can't check if filesystem is mounted due to missing mtab file while determining whether /dev/hd0s1 is mounted.
/dev/hd0s1: clean, 43930/181056 files, 281248/723200 blocks
done.
Cleaning up temporary files... /tmp.
/etc/rcS.d/S06mtab.sh: 48: /etc/rcS.d/S06mtab.sh: cannot open /proc/mounts: No such file
Activating lvm and md swap...(default pager): Already paging to partition hd0s5!
done.
Checking file systems...fsck from util-linux 2.20.1
hd2 : tray open or drive not ready
hd2 : tray open or drive not ready
end_request: I/O error, dev 02:00, sector 0
done.
Mounting local filesystems...mount: invalid option -- 'O'
Try  mount --help' or  mount --usage' for more information.
failed.
Activating swapfile swap...(default pager): Already paging to partition hd0s5!
done.
df: Warning: cannot read table of mounted file systems: No such file or directory
Cleaning up temporary files....
Configuring network interfaces...inetutils-ifconfig: invalid arguments
ifup: failed to open pid file /run/network/ifup-/dev/eth0.pid: No such file or directory
Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
can't create /var/lib/dhcp/dhclient./dev/eth0.leases: No such file or directory
Listening on Socket//dev/eth0
Sending on   Socket//dev/eth0
DHCPDISCOVER on /dev/eth0 to 255.255.255.255 port 67 interval 6
DHCPREQUEST on /dev/eth0 to 255.255.255.255 port 67
DHCPOFFER from 10.0.2.2
DHCPACK from 10.0.2.2
can't create /var/lib/dhcp/dhclient./dev/eth0.leases: No such file or directory
bound to 10.0.2.15 -- renewal in 38544 seconds.
done.
Cleaning up temporary files....
Setting up X socket directories... /tmp/.X11-unix /tmp/.ICE-unix.
INIT: Entering runlevel: 2
Using makefile-style concurrent boot in runlevel 2.
Starting enhanced syslogd: rsyslogd.
Starting deferred execution scheduler: atd.
Starting periodic command scheduler: cron.
Starting system message bus: dbusFailed to set socket option"/var/run/dbus/system_bus_socket": Protocol not available.
Starting OpenBSD Secure Shell server: sshd.
GNU 0.3 (debian) (console)
login:
This yields a surprisingly functional Debian/Hurd system with sysvinit. There are still lots of loose ends; I have populated my status page with them.

I spent quite some time in #hurd on freenode, and Pino Toscano offered to send me some old notes of his about the sysvinit issue. This turned out to be most helpful, as his notes were about plugging sysvinit's init into the boot process. This was the next logical thing to do for me, so first thing Monday morning I just put his runsystem.sysv file with some very minor tweaks (my version) into my overlay and had a working system in front of my eyes. That certainly was a nice way to start a week of work :), so many thanks again Pino!

For your convenience I've created an overlay with all necessary changes and recompiled binaries. Be careful though, you might want to back up /sbin/reboot first or you won't be able to reboot your system! Of course my tool does that for you, and I've managed to create a reasonably small (253 MB) image thanks to the zerofree tool; the process of compacting images is implemented in hurdtest as the compact subcommand. I also added sysvinit-specific tests. So if you want to try hurdtest, download the image and unxz it; my tool expects the image to be in the current working directory and to be named image.qcow2. Download and untar an overlay and execute:
% hurdtest -logfile log -sysvinit overlay path/to/second-week-overlay
If you specify -logfile you can execute tail -f path/to/log to see what the VM is printing to the console in real time. The sysvinit-specific tests demonstrate that sysvinit is indeed managing the system and that e.g. init 0 or shutdown -r now works.

There are at least three major problems that I will need to tackle for this project; I think the last issue is what I'm going to look at next week. There's information about this in the Hurd wiki, and I've spoken with Richard Braun about it and he drilled me to think about the hurdish way to solve this. I think I've got a pretty good picture of how to implement this in my head and am eager to give it a go. This will involve adding a function to the Hurd's filesystem interface that allows one to query an active translator about its mount options and "source", and about all active and passive translators that are attached to one of its nodes (see the sketch at the end of this post).

This is where the fun actually begins. This is changing an integral part of an operating system, but this is okay since it is just another program written in C, operating in userspace. But hey, I know that: that's just C, and if something goes wrong (and it always does) I fire up gdb and have a look. And mind you, I'm running all this as an unprivileged user, no need to jeopardize the entire system for that. Besides implementing /proc/mounts I will also add -O, --test-opts to mount, and -v, --verbose to swapon/swapoff.

PS: My blog is still not properly registered with planet.d.o. I asked planet@ about that, but got no response. Any help? I never imagined blogging being so stressful; I care about losing potential readers :/
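For context, individual nodes can already be queried with the standard Hurd tools; what is missing is a way to enumerate all of them for a /proc/mounts style listing. Roughly (a sketch; the output is what I would expect on the image above):

    # ask the active root translator for its options (its "source" and mount options)
    $ fsysopts /
    /hurd/ext2fs --writable device:hd0s1
    # show the passive translator recorded on a node
    $ showtrans /dev/null
    /hurd/null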

5 March 2013

Lisandro Damián Nicanor Pérez Meyer: My Debian freeze experience (so far)

This is the first freeze in which I'm involved with upload rights. And it has turned out to be quite an interesting ride so far, so I thought it would be nice to write about it.

As some of you may know, I'm a part of the Qt/KDE team. Before the freeze I was mostly involved in leaf packages, with some patch here or there, nothing fancy. And then the freeze came...

Bugs in Qt

...and bugs appeared in Qt. But they didn't get solved, even though the patches were there. Due to personal reasons, the manpower in Qt/KDE land decreased below normal levels (which were already low).

I took the time to review them, apply them in a local branch, and build and test the fixes. I had done a Qt upload before, but it was a team-consented one. This time there was not as much reaction in our IRC channel as there used to be, so I was doubting whether to go ahead or not. I asked Ana, my great friend and former sponsor, for an opinion on the subject, and she gave me some really important advice: the patches were looking good, and there is one really big truth: if something gets broken, it can be fixed with a later upload.

You might be asking yourself why I was that afraid of doing the upload. Well, when one maintains such a core package for many users, one has to be careful. And I had also got used to the fact that the "big ones" like Qt were normally handled by highly skilled people. Do not take me wrong here: it's not that those people were keeping them for themselves, it's knowing that one does not have the same skills or experience as they do.

But again, no one else was able to upload, and I had the chance and the will to do another upload if needed, so off it went. That was Qt 4:4.8.2-2.

Then new experiences followed: asking a buildd maintainer for a giveback, asking the Release Team for an unblock (more on this later), etc. While sponsoring me, Ana gave me another piece of excellent advice, which I always keep in mind:

You can't know **everything** about Debian.

And that also includes a not so technical skill: communicating with other teams. But finally we got this new version of Qt in testing. Cool :-)

Of course, new bugs appeared, and my lack of skills (and sometimes, time) was made up for by team work: Pino looking at patches and Sune contacting upstream. The eleven uploads that followed are a nice example of team work, even if I was the one who signed and did the uploads. Whoever uses Qt should know that these wonderful people (including those who are not so active nowadays, like Modestas or Fathi) have done a lot to bring the best to their users. Thank you guys!

Be careful, they might bite you back!

Coming back to the non-technical skills, sometimes you have to communicate with other teams in Debian. And each team is (naturally) a separate world: possibly different people, different goals, etc. Of course, we share the goal of making Debian the best experience we can, but we do not necessarily agree on the paths to achieve it.

During the freeze, there is a team that gets lots of pressure, and not by chance: the Release Team. They handle a very important task, which is to ride the freeze to get to a release. OK, that's what everyone knows. Now, one thing is knowing that and another is really understanding what that means.

Of course I was in the first group. From the outside, communicating with the RT was a kind of "special art", and not an easy one. I had even been advised not to ask for more than one or two unblocks per weekend, as they might "bite me back". So I put my flamesuit on and... launched reportbug release.debian.org.

Now I'm really happy to say that my experience was far from what I described above. And yes, I even had the chance to disagree on some stuff. But remember: non-technical skills, a.k.a. social skills. Once I started to know what was going on inside the RT (joining #debian-release was a big help for that) I learnt some nice tips for approaching them. Please allow me to list some of them:

  • Remember: you are the maintainer of the package; they are like gatekeepers who are there to help us coordinate a release. But they don't maintain the code, you do that. So try to be verbose when needed, explain the changes and don't forget a nice diff. They need to understand what is going on: they can't read your mind.
  • They are human beings too: not every day might be their best day (the same goes for you too!). And they are under the pressure of a release. Be patient, that finally pays off.
  • Do your changes seem not so clear? Try to improve them.
  • Does the package have a lot of changes, but you really feel they are needed? Try to explain that as well as you can.
  • Try to put yourself in their position: do we really want this? If in doubt, there is a nice way to know what they think: a pre-approval bug.
I want to pause on this last point. A pre-approval bug is an unblock bug in which you edit the subject to add "pre-approval" to it. Easy, isn't it? It gives you the opportunity to know what the RT thinks before doing the upload. In other words: it gives you the chance to communicate and do things in the best possible way for all the parties involved.
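For example (package and version invented), the resulting report is just a regular unblock bug with an amended subject:

    $ reportbug release.debian.org
    ... pick "unblock", answer the questions, then edit the subject ...
    Subject: pre-approval: unblock: qtwebkit/2.2.1-5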

I have also seen pre-approval bugs that were really not needed. But to learn where the threshold lies between what can be directly uploaded and what deserves a pre-approval bug, you need to know the guidelines the RT gives you. Do you still have doubts? Fire a pre-approval bug and try to be clear.

Of course, these are all fruits of my experience with the RT during this time. If the RT thinks differently from what I'm writing here, please stand up: we are here to listen to you and learn :-)

As a side note, I think I should file a wishlist bug to include the pre-approval bug option in reportbug. Yes, I'm lazy :-)

Summing up

Overall this was a very nice and positive experience. We are not done yet. Are we really done at some point? Let's hope not, because this is where the fun comes from :-)

16 February 2013

Luca Falavigna: The DPL Game

Playing the DPL Game: here are my nominations for the Fantastic Four. P.S. Sorry Joss, I'm not your man ;)

26 October 2012

Russ Allbery: Review: Fables: Legends in Exile

Review: Fables: Legends in Exile, by Bill Willingham, et al.
Series: Fables #1
Penciller: Lan Medina
Inker: Steve Leialoha
Inker: Craig Hamilton
Colorist: Sherilyn van Valkenburgh
Letterer: Todd Klein
Publisher: Vertigo
Copyright: 2002
ISBN: 1-56389-942-6
Format: Graphic novel
Pages: 127
The characters of human fairy tales are all real. Real, but hidden: they have been driven from their ancestral homelands by some force referred to here only as the Adversary, and have fled to the mundane world to escape his notice. Specifically, New York, where those characters who can pass as human blend into the streets of New York City, and those who can't (referred to only in passing in this story arc apart from one pig) stay at a farm upstate.

That's the brilliant premise of Fables, an ongoing comic series from Vertigo that's up to 121 issues plus numerous spinoffs as of this writing. This is the first trade paperback, collecting issues 1 through 5 plus a prose story.

Legends in Exile is, in underlying form, a murder mystery. It opens with Jack reporting a murder to Bigby Wolf, who is what passes for law enforcement among the Fables. Rose Red's apartment has been discovered covered in blood, and her body is missing. "No more happily ever after" is written on the wall in blood. Jack, despite being the one to report the crime, is Rose's lover and an obvious suspect. So is Snow White, the effective leader of the Fables (King Cole is a figurehead) and Rose's estranged sister. And, perhaps coincidentally, Prince Charming, Snow White's philandering and stunningly narcissistic ex-husband, has just turned up in town again.

The basic plot plays out like a typical murder mystery, complete with a grand reveal in the last chapter and a reconstruction of events. It's diverting but nothing special in itself; it's primarily a hook on which to hang an introduction to the major characters and an exploration of Fables politics and history. And that part is excellent. Like most fractured fairy tales, the characters are more complex and nuanced than they are allowed to be in the canonical stories; unlike most fractured fairy tales, nearly all of that development is allowed to happen after the original stories. The original stories happened, largely as we all remember them, but then the characters kept living and changing. Wolf and his relationship with one of the three pigs is one of the highlights, but I think the odious Prince Charming is the best bit of characterization. Willingham takes the idea of a charming seducer and layers in the rest of the womanizing associations that come to mind, while retaining a slick surface charm that is hard not to like at some level.

I'm not the best at critiquing art. I read graphic novels for the story, with the art as a nice bonus if it's fairly good. But I can say that Medina's pencils tell the story well, with a few larger panels that were quite impressive. I particularly liked Snow White's office, early in the collection. Detail is reasonably good throughout, there aren't too many panels featuring only talking heads, and Medina does a wonderful job with the pig. (Significant credit probably also goes to Steve Leialoha and Craig Hamilton, the inkers. I'm not a good enough art critic to distinguish between the quality of the pencil work and what was added by subsequent inking.)

The largest problem for me with Legends in Exile is my standard problem with graphic novels and the reason I don't buy many of them: the cover price is $10 for what is effectively a short story that builds an interesting world. If one really loves art in stories, graphic novels may be a good value. If, like me, one mostly cares about the story and would be almost as happy with straight prose, the value becomes questionable given the brief length of the material.
That said, Legends in Exile does add a quite good prose short story by Willingham, "A Wolf in the Fold," which in several ways is better than the main story of the collection. It tells the story of the original exile and the covenant that settled all past grudges (mostly) from the perspective of the wolf. Willingham does a great job here both striking a more mythic tone than the main story and still tying it back to the sarcastic and practical tone of the rest of the collection.

Although the main story itself is not particularly memorable, the background and concept make this one of the better graphic novels I've read. I can see why it's won so many awards, and I'll probably keep reading, despite the high graphic novel price tag.

Followed by Fables: Animal Farm.

Rating: 7 out of 10

18 July 2012

Jon Dowland: Debian Days 4, 5 and 6 (round-up)

And we've frozen! How have we done on the musicbrainz stuff? I was really pleased to see Michael Biebl upload new sound-juicer (four bugs closed!), rhythmbox and gnome-sushi packages which move them up to musicbrainz5. I NMUed goobox (with the maintainer's permission) but sadly there was a build issue and the NMU was reversed with a second upload, so it's back to using libmusicbrainz3 in the freeze. That leaves two more packages. I haven't discussed or considered putting forward this 'transition' for freeze exceptions, but Pino Toscano raised a good point: upstream might not support the version 1 WS API (which libmusicbrainz3 depends on) throughout the wheezy lifetime. Some more research is needed here.

What else? An NMU of acpidump to resolve a trivial manpage error that I filed a patch for two years ago; the latest flactag upstream version; a new version of game-data-packager, incorporating stuff Joey Schmit patched last year, which fixes Heretic support and adds Hexen support; and a new version of chocolate-doom. The last two are particularly interesting because I'm in Uploaders: despite having left the Games team last year: there have been no subsequent uploads before my latest ones. I removed myself from Uploaders: with these uploads, which technically means they are NMUs.

What next? Typically I find the freeze a hard time to work on Debian. I've never been very effective at chasing and resolving RC bugs. On the other hand, long, protracted freezes are demotivational for everyone. I'm going to try and focus on goals which hasten the release of wheezy and resist the urge to work on things that would not show up until wheezy+1.

7 July 2012

Robert Collins: Reprap driver pinouts

This is largely a memo to my future self, but it may save some time for someone else facing what I was last weekend. I've been putting together a Reprap recently, seeded by the purchase of a partially assembled one from someone local who was leaving town and didn't want to take it with them. One of the issues it had was that 2 of the stepstick driver boards it uses were burnt out, and in NZ there are no local suppliers that I could find. There is however a supplier of Easydriver driver boards, which are apparently compatible. (The Reprap electronics is a sanguinololu, which has a fitted strip that exactly matches stepstick (or pololu) driver boards.) The Easydrivers are not physically compatible, but they should be pin compatible... no?

I mapped across all the pins carefully, and the only issues were: there are three GNDs on the Easydriver vs 2 on the stepstick, and the PFD pin isn't exposed on the stepstick board so it can't be mapped across. I ended up with this mapping (I'm not sure where pin 1 is *meant* to be on the stepstick, so I'm treating VMOT, the anti-clockwise corner pin on the same side as the 2B/2A/1A/1B pins when looking down on an installed board, as pin 1, and going clockwise from there):

Stepstick  Easydriver
VMOT       M+
GND        GND
2B         B2
2A         A2
1A         A1
1B         B1
VDD        +5V
GND        GND
Dir        Dir
Step       Step
Slp        Slp
Rst        Rst
Ms3        Nothing
Ms2        Ms2
Ms1        Ms1
En         Enable

But when I tried to use this, the motor just jammed up solid. A bit of debugging and trial and error later and I figured it out. The right mapping for the motor pins is:

2B         B2
2A         B1
1A         A1
1B         A2

That's right, the two boards have chosen opposite conventions for labelling the motor coil pins: on the stepstick, 1/2 refers to the coil and A/B to the two ends that need to have voltage put across them; on the Easydriver, A/B refers to the coil and 1/2 to the two ends. Super confusing, especially as I haven't been doing much electronics for, oh, a decade or so. I'm reminded very strongly of Rusty's scale of interface usability here.

19 April 2012

Raphaël Hertzog: People behind Debian: Samuel Thibault, working on accessibility and the Hurd

Samuel Thibault is a French guy like me, but it took years until we met. He tends to keep a low profile, even though he's doing lots of good work that deserves to be mentioned. He focuses on improving Debian's accessibility and contributes to the Hurd. Who said he's a dreamer? :-) Check out his interview to get some news of Wheezy's status on those topics.

Raphael: Who are you?

Samuel: I am 30 years old, and live in Bordeaux, France. During the workday, I teach Computer Science (Architecture, Networking, Operating Systems, and Parallel Programming, roughly) at the University of Bordeaux, and conduct research in heterogeneous parallel computing. During the evening, I play the drums and the trombone in various orchestras (harmonic/symphonic/banda/brass). During the night, I hack on whatever fun things I can find, mainly accessibility and the Hurd at the moment, but also miscellaneous bits such as the Linux console support. I am also involved in the development of Aquilenet, an associative ISP around Bordeaux, and getting involved in the development of the network infrastructure in Bordeaux. I am not practicing Judo any more, but I roller-skate to work, and I like hiking in the mountains. I also read quite a few mangas. Saturday mornings do not exist in my schedule (Sunday mornings do, it's Brass Band rehearsal :) ).

Raphael: How did you start contributing to Debian?

Samuel: Bit by bit. I have been hacking around GNU/Linux since around 1998. I installed my first Debian system around 2000, as a replacement for my old Mandrake installation (which after all my tinkering was actually no longer looking like a Mandrake system any more!). That was Potato at the time, which somebody offered me through a set of CDs (downloading packages over the Internet was unthinkable at the time with the old modems). I have been happily reading and hacking around the documentation, source code, etc. provided on them. Contributions really started to take off when I went to ENS Lyon in 2001: broadband Internet access in one's own student room! Since sending a mail was then really free, I started submitting bugs against various packages I was using. Right after that I started submitting patches along with them, and then patches to other bugs. I did that for a long time actually. I had very little knowledge of all the packaging details at the time; I was just a happy hacker submitting reports and patches against the upstream source code.

At ENS Lyon, I met a blind colleague with very similar hacking tastes (of course we became friends) and he proposed that, for our student project, we work on a brlnet project (now called brlapi), a client/server protocol that lets applications render text on braille devices themselves. Along the way, I got to learn in detail how a blind person can use a Unix system and the principles that should be followed when developing accessibility. That is how I got involved in it. We presented our project at JDLL, and the Hurd booth happened to be next to our table, so I discussed with the Hurd people there how the Hurd console could be used through braille. That is how I got into the Hurd too. From then on, I progressively contributed more and more to the upstream parts of both accessibility software and the Hurd, and then to the packaging part of them. Through patches in bug reports first, as usual, as well as through discussions on the mailing lists. But quickly enough people gave me commit access so I could just throw the code in.
I was also given control over the Hurd buildds to keep them running. It was all good at that stage: I could contribute to all the parts I cared about. People however started telling me that I should just apply to become a Debian Developer, from both the accessibility and Hurd sides. I had also seen a bunch of my friends going through the process. I was however a bit scared (or probably it was just an excuse) by having to manage a gpg key; it seemed like a quite dangerous tool to me (even if I already had commit access to glibc at the time anyway). I eventually applied for DM in 2008 so as to at least be able to upload some packages to help the little manpower of the Accessibility and Hurd teams. From then on I already had a gpg key, thus no excuse any more. And having it in the DM keyring was not enough for e.g. signing the hurd-i386 buildd packages. So I ended up going through NM in 2009, which went very fast, since I had already been contributing to Debian and learning all the needed stuff for almost 10 years! I now have around 50 packages on my QA page, and being a DD is actually useful for my work, to easily push our software to the masses :)

So to sum it up, the Debian project is very easy to contribute to and open to new people. It was used during discussions at the GNU Hackers Meeting 2011 as an example of a very open community with public mailing lists and discussions. The mere fact that anybody can take the initiative of manipulating the BTS (if not scared by the commands) without having to ask anybody is an excellent thing to welcome contributions; it is notable that the GNU project migrated to the Debbugs BTS. More generally, I don't really see the DD status as a must, especially now that we have the DM status (which is still a very good way to drag people into becoming DDs). For instance, I gave a talk at FOSDEM 2008 about the state of accessibility in Debian. People did not care who I was; they cared that there was important stuff going on and somebody talking about it. More generally, decisions that are made through a vote are actually very rare. Most of the time, things just happen on the mailing lists or IRC channels where anybody can join the discussion. So I would recommend that beginners first use the software, then start reporting bugs, then start digging into the software to try to fix the bugs by themselves, eventually propose patches, and get them reviewed. At some point the submitted patches will already be correct most of the time. That's when the maintainers will start getting bored of just applying the patches, and simply provide commit access, and voilà, one has become a main contributor.

Raphael: You're one of the main contributors to the Debian GNU/Hurd port. What motivates you in this project?

Samuel: As I mentioned above, I first got real contact with the Hurd from the accessibility point of view. That initially brought me into the Hurd console, which uses a flexible design and nice interfaces to interact with it. The Hurd driver for console accessibility is actually very straightforward, way simpler than the Windows or Linux drivers. That is what caught me initially. I have continued working on it for several reasons. First, the design is really interesting for users. There are many things that are natural in the Hurd while Linux is still struggling to achieve them, such as UID isolation, recently mentioned in LWN. What I really like in the Hurd is that it excels at providing users with the same features as the administrator's.
For instance, I find it annoying that I still cannot mount an ISO image that I build on e.g. ries.debian.org. Linux now has FUSE, which is supposed to permit that, but I have never seen it enabled on an ssh-accessible machine, only on desktop machines, and usually just because the administrator happens to be the user of the machine (who could as well just have used sudo). For me, it is actually Freedom #0 of Free Software: let the user run programs for any purpose, that is, combining things together in all the possible ways, and not being prevented from doing some things just because the design does not permit achieving them securely. I had the chance to give a Hurd talk to explain that at GHM 2011, whose main topic was "extensibility"; I called it "GNU/Hurd AKA Extensibility from the Ground", because the design of the Hurd is basically meant for extensibility, and does not care whether it is done by root or a mere user. All the tools that root uses to build a GNU/Hurd system can be used by the user to build their own GNU/Hurd environment. That is guaranteed by the design itself: the libc asks for things not from the kernel, but from servers (called translators), which can be provided by root, or by the user. It is interesting to see that this is actually also tried, with varying success, in GNU/Linux, through gvfs or Plash. An example of things I love being able to do is:

    $ zgrep foo ~/ftp://cdn.debian.net/debian/dists/sid/main/Contents-*.gz

On my Hurd box, the ~/ftp: directory is indeed actually served by an ftpfs translator, run under my user uid, which is thus completely harmless to the system.

Secondly, and not least, the Hurd provides me with interesting yet not too hard challenges. LWN confirmed several times that the Linux kernel has become very difficult to significantly contribute to, so it is no real hacking fun any more. I have notably implemented TLS support in the Hurd, and Xen and 64-bit support in the GNU Mach kernel used by the Hurd. All three were very interesting to do, but were already done for Linux (at least for all the architectures which I actually know a bit and own). It happens that both the TLS and Xen hacking experience became actually useful later on: I implemented TLS in the threading library of our research team, and the Xen port was a quite interesting line on my CV for getting a postdoc position at XenSource :)

Lastly, I would say that I am used to lost causes :) My work on accessibility is sometimes a real struggle, so the Hurd is almost a kind of relief. It is famous for its vapourware reputation anyway, and so it is fun to just try to contribute to it nevertheless. An interesting thing is that the opinion of people on the Hurd is often quite extreme, and only rarely neutral. Some will say it is pure vapourware, while others will say that it is the hope of humanity (yes, we do see those coming to #hurd, and they are not always just trolls!). When I published a 0.401 version on 2011 April 1st, the comments of people were very diverse, and some even went as far as saying that it was horrible of us to make a joke about the promised software :)

Raphael: The FTPmasters want to demote the Hurd port to the debian-ports.org archive if it doesn't manage a stable release with wheezy. We're now at 2 months of the freeze. How far are you from being releasable?

Samuel: Of course, I cannot speak for the Debian Release team. The current progress is however encouraging.
During Debconf11, Michael Banck and I discussed with a few Debian Release team members the kind of goals that should be achieved, and we are near completion of that part. The Debian GNU/Hurd port can almost completely be installed from the official mirrors, using the standard Debian Installer. Some patches need some polishing, but others are just waiting to be uploaded. Debian GNU/Hurd can start a graphical desktop and run office tools such as gnumeric, as well as the iceweasel graphical web browser, KDE applications thanks to Pino Toscano's care, and GNOME applications thanks to Emilio Pozuelo Monfort's care. Of course, general textmode hacking with gcc/make/gdb/etc. just works smoothly. Thanks to recent work on ghc and ada by Svante Signell, the archive coverage has passed 76%.

There was a concern about network board driver support: until recently, the GNU Mach kernel was indeed still using a glue layer to embed the Linux 2.2 or even 2.0 drivers (!). Finding a network board supported by such drivers had of course become a real challenge. Thanks to the GSoC work of Zheng Da, the DDE layer can now be used to embed Linux 2.6.32 drivers in userland translators; it was recently ACCEPTed into the archive and thus brings much wider support for network boards. It also pushes yet more toward the Hurd design: network drivers as userland processes rather than kernel modules.

That said, the freeze itself is not the final deadline. Actually, freeze periods are a rest for porters, because maintainers stop bringing in newer upstream versions which of course break on peculiar architectures. That will probably be helpful for continuing to improve the archive coverage.

Raphael: The kfreebsd port brought to light all the packages which were not portable between different kernels. Did that help the Hurd port, or are the problems too different to expect any mutual benefit?

Samuel: The two ports have clearly helped each other in many aspects. The hurd-i386 port is the only non-Linux one that has been kept working (at least basically) for the past decade. That helped to make sure that all tools (dpkg, apt, toolchain, etc.) were able to cope with non-Linux ports, and kept that odd-but-why-not goal around, and evidently enough achievable. In return, the kFreeBSD port managed to show that it was actually releasable, at least as a technological preview, thus setting an example.

In the daily work, we have sometimes worked hand in hand. The recent porting efforts of the Debian Installer happened roughly at the same time. When fixing some piece of code for one, the switch-case would be left for the other. When some code could be reused by the other, a mail would be sent to advise doing so, etc. In the packaging effort, it also made a lot of difference that a non-Linux port is exposed as a released architecture: people took it upon themselves to fix code that is Linuxish for no real reason.

The presence of the kFreeBSD port is however also sometimes a difficulty for the Hurd: in the discussions, it sometimes tends to become a target to be reached, even if the systems are not really comparable. I do not need to detail the long history of the FreeBSD kernel and the number of people hacking on it, some of them full-time, while the Hurd has only a small handful of free-time hackers. The FreeBSD kernel's stability has already seen long-term polishing, and a fair amount of the Debian software was actually already ported to the FreeBSD kernel, thanks to the big existing pure-FreeBSD hackerbase.
These do not hold for the GNU/Hurd port, so expectations should be adjusted accordingly.

Raphael: You're also very much involved in the Debian Accessibility team. What are the responsibilities of this team and what are you doing there?

Samuel: As you would expect, the Debian Accessibility team works on packaging accessibility-related packages and helping users with them; I thus do both. But the goal is way beyond just that. Actual accessibility requires integration. Ideally, a blind user should be able to just come to a Debian desktop system, plug in his braille device, or press a shortcut to enable speech synthesis, and just use the damn computer, without having to ask the administrator to install some oddly-named package and whatnot. Just like any sighted user would do. He should be able to diagnose why his system does not boot, and at worst be able to reinstall his computer all by himself (typically at 2am). And that is hard to achieve, because it means discussing the integration of accessibility features by default. For instance, the Debian CD images now beep at the boot menu. That is a precious feature that was discussed between debian-boot and debian-accessibility for a few weeks before agreeing on how to do it without too much disturbance. Similarly, my proposal to install the desktop accessibility engines was discussed for some time before being committed. What was however surprisingly great is that when somebody brought the topic back for discussion, non-debian-accessibility people answered by themselves. This is reassuring, because it means things can be done durably in Debian.

On the installation side, our current status is that the stable Debian installer has a high contrast color theme, and several years ago I pushed toward making the standard CD images automatically detect braille devices, which permits standalone installation. I have added some software speech synthesis to the Wheezy installer (which again brought discussion about size increase vs versatility etc.) for blind people who do not have a braille device. I find it interesting to work on such topics in Debian rather than another distribution, because Debian is an upstream for a lot of distributions. Hopefully they just inherit our accessibility work. It at least worked for the text installer of Ubuntu.

Of course, the Accessibility team is looking for help, to maintain our current packages, but also to introduce new packages from the TODO list or create some backports. One does not need to be an expert in accessibility: tools can usually be tested, at least basically, by anybody, without particular hardware (I do not own any, I contributed virtual ones to qemu). For new developments and ideas, it is strongly recommended to come and discuss on debian-accessibility, because it is easy to get on a wrong track that does not bring actual accessibility. We still have several goals to achieve: the closest one is to just fix the transition to gnome3, which has been quite bad for accessibility so far :/ In the longer run, we should ideally reach the scenario I have detailed above: desktop accessibility available and ready to be enabled easily by default.

Raphael: What's the biggest problem of Debian?

Samuel: Debian is famous for its heated debian-devel discussions. And some people eventually say "this is no fun any more". That is exemplified in a less extreme way in the debian-boot/accessibility discussions that I mentioned above.
Sometimes one needs a really stubborn thick head to continue the discussion until finding a compromise that will be accepted for commit. That is a problem because people do not necessarily have that much patience, and will thus prefer to contribute to a project with easier acceptance. But it is also a quality: as I explained above, once it is there, it is apparently there for good. The Ubuntu support of accessibility in its installer has been very uneven, in part due to a frequently changing codebase. The Debian Installer codebase is more in a convergence process: its base will have changed very little between squeeze and wheezy. That allowed the Debian Accessibility team to keep improving its accessibility support, and not have to redo it. A wiki page explains how to test its accessibility features, and some non-debian-accessibility people do go through it.

A problem I am much more frightened by is the manpower in some core teams: the Debian Installer, grub, glibc, Xorg, gcc, mozilla derivatives... When reading the changelogs of these, we essentially keep seeing the same very few names over and over. And when one core developer leaves, it is very often still the same names which appear again to do the work. It is hard to believe that there are a thousand DDs working on Debian. I fear that Debian does not manage to get people to work on core things. I often hear people saying that they do not even dare think about putting their hands inside Xorg, for instance. Xorg is complex, but it seems to me that this complexity tends to be overrated, and a lot of people could actually help there, as well as in all the teams mentioned above. And if nobody does it, who will?

Raphael: Do you have wishes for Debian Wheezy?

Samuel: That is an easy one :) Of course I wish that we manage to release the hurd-i386 port. I also wish that the accessibility of gnome3 gets fixed enough to become usable again. The current state is worrying: so much has changed that the transition will already be difficult for users, and the current bugs will clearly not help. I also hope to find the time to fix the qt-at-spi bridge, which should (at last!) bring complete KDE accessibility.

Raphael: Is there someone in Debian that you admire for their contributions?

Samuel: Given the concerns I expressed above, I admire all the people who do spend time on core packages, even when that is really not fun every day. Just to alphabetically name a few people I have seen so often here and there in the areas I have touched over the last few years: Aurélien Jarno, Bastian Blank, Christian Perrier, Colin Watson, Cyril Brulebois, Frans Pop, Jörg Jaspert, Joey Hess, Josselin Mouette, Julien Cristau, Matthias Klose, Mike Hommey, Otavio Salvador, Petr Salinger, Robert Millan, Steve Langasek. Man, so many things that each of them works on! Of course this list is biased towards the parts that I touched, but people working in other core areas also deserve the same admiration.
Thank you to Samuel for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Note that older interviews are indexed on wiki.debian.org/PeopleBehindDebian.

Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Google+, Twitter and Facebook.


30 September 2011

Axel Beckert: Fun facts from the UDD

After spotting an upload of mira, who in turn spotted an upload of abe (the package, not an upload by me aka abe@d.o), mira (mirabilos aka tg@d.o) noticed that there are Debian packages which have the same name as some Debian Developers' login names. Of course I noticed a long time ago that there is a Debian package with my login name 'abe'. Another well-known Debian login and former package name is amaya. But since someone else came up with that thought too, it was time to find the definitive answer to the question: which DD login names also exist as Debian package names? My first try was based on the list of trusted GnuPG keys:
$ apt-cache policy $(gpg --keyring /etc/apt/trusted.gpg --list-keys 2>/dev/null | \
                     grep @debian.org | \
                     awk -F'[<@]' '{ print $2 }' | \
                     sort -u) 2>/dev/null | \
                   egrep -o '^[^ :]*'
alex
tor
ed
bam
ng
But this was not satisfying, as my own name didn't show up and gpg also threw quite a lot of block reading errors (which is also the reason for redirecting STDERR). mira then had the idea of using the Ultimate Debian Database to answer this question more properly:
udd=> SELECT login, name FROM carnivore_login, carnivore_names
      WHERE carnivore_login.id=carnivore_names.id AND login IN
      (SELECT package AS login FROM packages, active_dds
       WHERE packages.package=active_dds.login UNION
       SELECT source AS name FROM sources, active_dds
       WHERE sources.source=active_dds.login)
      ORDER BY login;
 login |                 name
-------+---------------------------------------
 abe   | Axel Beckert
 alex  | Alexander List
 alex  | Alexander M. List  4402020774 9332554
 and   | Andrea Veri
 ash   | Albert Huang
 bam   | Brian May
 ed    | Ed Boraas
 ed    | Ed G. Boraas [RSA Compatibility Key]
 ed    | Ed G. Boraas [RSA]
 eric  | Eric Dorland
 gq    | Alexander GQ Gerasiov
 iml   | Ian Maclaine-cross
 lunar | Jérémy Bobbio
 mako  | Benjamin Hill
 mako  | Benjamin Mako Hill
 mbr   | Markus Braun
 mlt   | Marcela Tiznado
 nas   | Neil A. Schemenauer
 nas   | Neil Schemenauer
 opal  | Ola Lundkvist
 opal  | Ola Lundqvist
 paco  | Francisco Moya
 paul  | Paul Slootman
 pino  | Pino Toscano
 pyro  | Brian Nelson
 stone | Fredrik Steen
(26 rows)
Interestingly, tor (Tor Slettnes) is missing from this list, so it's not complete either. At least I'm quite sure that nobody maintains a package with their own login name as the package name. :-) We also have no packages ending in '-guest', so there's no chance that a package name matches an Alioth guest account either.
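For what it's worth, that last claim is easy to re-check at any time. A quick sketch of how one might do it (my own one-liner, not from the original post): list all package names known to apt and keep only those ending in -guest, so an empty output means no clash:
$ apt-cache pkgnames | grep -- '-guest$'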

13 September 2011

Vincent Sanders: Electricity is really just organized lightning.

I have recently been working on a project that requires a 12V supply. Ordinarily this is no problem: my selection of bench supplies is generally more than a match for anything I throw at them.

(Pictured: my TS3022S bench supply.)

This project however needed a little more "oomph" than usual, specifically 200W more. Funnily enough, precision variable-output bench supplies capable of delivering 20A are rare and *very* expensive beasties.

So we turn to a fixed output supply; after all, I will want to run my project without hogging my bench supplies anyway. These can be bought from various electronics suppliers like Farnell from around the £50 mark, and Chinese imports from Ebay sellers start around the £20 mark.

All very well and good, but that is money I was not planning on spending, and possibly a month of waiting for an already badly delayed project. So I decided to convert an old ATX PSU into a 12V source. This is not a new idea, and a quick search revealed many suitable guides online. I had a quick skim, decided I understood the general idea and ploughed ahead.

Wikipedia has a very useful page on the ATX standard, complete with pinout diagrams and colour codes. The pile of grey-box ATX supplies available on my shelf was examined, and one was helpfully labelled with a sticker proclaiming 22A@12V, so we had a winner.

Opening the case of the donor 450W CIT-branded supply revealed a mostly empty enclosure with the usual basic switching arrangement. I removed most of the wire loom aside from two of each output voltage (3.3V, 5V and 12V; I figured the other voltages might be useful in future) and three commons; the 3.3V and 5V sense lines were also kept. Each of these pairs was cut to length and the leads were wired to 4mm sockets.

The "PWR_EN" line was wired via a toggle switch to ground so the output can be switched on and off easily. The 5V standby and a 5V output line were wired to a green/red bi-colour LED (via 270 current limit resistors) to give indication that mains is present and when the output is on.

Holes were drilled for four 4mm sockets, an indicator LED and a switch. The connectors and switches were all mounted in the PSU casework. I plugged it all in, put an 8.2Ω load resistor on the 5V line with an ammeter in line, and a voltmeter across the 12V rail.

(Pictured: the finished ATX bench power supply.)

I turned the mains on and the LED lit up green (5V standby worked), and when I flicked the output switch the LED turned orange, the 12V line went to 12V and the expected 0.6A flowed through the load resistor.
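For reference, that expected figure is just Ohm's law on the 5V rail: I = V / R = 5V / 8.2Ω ≈ 0.61A, so the measured 0.6A confirms the 5V rail was regulating properly under load.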

Basically, Success!

I have since loaded the supply up to the 200W operating load and nothing unexpected has happened, so I am happy. It seems converting an ATX PSU is a perfectly good way of getting a 200W 12V supply, and I can recommend it to anyone as cheap as me who is willing to put an hour or so into such a project.

17 February 2011

Cyril Brulebois: Debian XSF News #5

Time for a fifth Debian XSF News issue!

7 February 2011

Ana Beatriz Guerrero Lopez: post and pre-release fun

The last weeks before a Debian release are usually boring with respect to working on new stuff, since unstable is pretty much closed to development. Now that the release is finished, this fun is back \o/ I have been in an upload frenzy since yesterday night and have updated some packages in unstable: KOffice[1] and KOffice-l10n 2.3.1, yakuake, rsibreak and tintin++. Another of my packages, kid3, magically got updated in unstable before I had time to look at it; the magic of having active co-maintainers. [1] Thanks for the gentle push to update this, Pino :)

I also emailed one of my upstream maintainers to ask him about the KDE 4 version of his application, and he will likely make a release soon. One of the goals for wheezy is to completely remove KDE 3 and Qt 3. If you are maintaining a KDE 3 or Qt 3 based application, we are about to start annoying you about this! See http://wiki.debian.org/kdelibs4c2aRemoval and http://wiki.debian.org/qt3-x11-freeRemoval

During the last 2 weeks before the release, I had some fun watching how the Squeeze release countdown banner I published at http://news.debian.net spread to a lot of websites: personal blogs, community sites, forums and news portals.
The traffic has been increasing little by little through the 2 weeks the banner has been online, and it is currently still moderately high, since people keep retweeting the news item about the Debian release. As I write these lines, the banner has been served 449575 times to a total of 244975 unique IP addresses!

28 December 2010

Jose Luis Rivas Contreras: Follow-up[2]: pino-0.3+librest (and not cmake, at all)

So, after having trouble while trying to compile pino-0.3, it turns out it has nothing to do with cmake, nor with librest not shipping gir files, but with the vala bindings and an incompatibility between those bindings in rest-0.6 and rest-0.7. So if I really want to compile pino-0.3 I have two options: use rest-0.6, or update rest.vapi for vala and send a patch for pino-0.3 to get the code working with rest-0.7 and its new vala bindings. But there's not enough free time for me these holidays, so maybe next time :-/ Anyway, thanks to all those who helped :)
